"The Alignment Problem", Linking Machine Learning And Human Values
I've finished reading "The Alignment Problem" (ISBN: 9780393635829) by Brian Christian. As the subtitle states, it's an attempt to connect the fuzzier aspects of human values with the growing relevance of machine learning (ML). By ML, the author almost exclusively means neural networks. Overall, it was a good book. As with most books, though, it missed a few things.